9 research outputs found

    Programming with human computation

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 151-156). Amazon's Mechanical Turk provides a programmatically accessible micro-task market, allowing a program to hire human workers. This has opened the door to a rich field of research in human computation, where programs orchestrate the efforts of humans to help solve problems. This thesis explores the challenges that programmers face in this space: technical challenges like managing high latency, as well as psychological challenges like designing effective interfaces for human workers. We offer tools and experiments to overcome these challenges in an effort to help future researchers better understand and harness the power of human computation. The main tool this thesis offers is the crash-and-rerun programming model for managing high-latency tasks on MTurk, along with the TurKit toolkit, which implements crash-and-rerun. TurKit provides a straightforward imperative programming environment where MTurk is abstracted as a function call. Based on our experience using TurKit, we propose a simple model of human computation algorithms involving creation and decision tasks. These tasks suggest two natural workflows: iterative and parallel, where iterative tasks build on each other and parallel tasks do not. We run a series of experiments comparing the merits of each workflow; iteration appears to increase quality, but it has limitations such as reducing the variety of responses and getting stuck in local maxima. Next we build a larger system composed of several iterative and parallel workflows to solve a real-world problem, that of transcribing medical forms, and report our experience. The thesis ends with a discussion of the current state of the art of human computation and suggests directions for future work. by Greg Little. Ph.D.
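    To make the iterative and parallel workflows concrete, here is a minimal sketch under assumed names: createTask and voteTask are hypothetical stand-ins for posting a creation or decision task and waiting for a worker's response, not TurKit's actual API (TurKit itself is JavaScript; TypeScript is used here for illustration).

        // Hypothetical sketch of the two workflows described above. `createTask` and
        // `voteTask` stand in for calls that post work to MTurk and block until a
        // worker responds; they are assumptions, not part of TurKit.
        async function createTask(prompt: string): Promise<string> {
          // ...post a HIT asking a worker to write or improve text... (stand-in)
          return "worker response for: " + prompt;
        }

        async function voteTask(a: string, b: string): Promise<string> {
          // ...post a HIT asking workers to pick the better of two texts... (stand-in)
          return a;
        }

        // Iterative workflow: each creation task builds on the previous result.
        async function iterative(prompt: string, rounds: number): Promise<string> {
          let best = await createTask(prompt);
          for (let i = 1; i < rounds; i++) {
            const improved = await createTask(prompt + "\nImprove this draft:\n" + best);
            best = await voteTask(best, improved); // decision task keeps the preferred draft
          }
          return best;
        }

        // Parallel workflow: creation tasks are independent; decision tasks pick a winner.
        async function parallel(prompt: string, n: number): Promise<string> {
          const drafts = await Promise.all(
            Array.from({ length: n }, () => createTask(prompt)));
          let best = drafts[0];
          for (const draft of drafts.slice(1)) best = await voteTask(best, draft);
          return best;
        }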

    Programming with keywords

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 105-108). Modern applications provide interfaces for scripting, but many users do not know how to write script commands. However, many users are familiar with the idea of entering keywords into a web search engine. Hence, if a user is familiar with the vocabulary of an application domain, they may be able to write a set of keywords expressing a command in that domain. For instance, in the web browsing domain, a user might enter the keywords click search button. This thesis presents several algorithms for translating keyword queries such as this directly into code. A prototype of this system in the web browsing domain translates click search button into the code click(findButton("search")). This code may then be executed in the context of a web browser to carry out the command. Another prototype in the Java domain translates append message to log into log.append(message), given an appropriate context of local variables and imported classes. The algorithms and prototypes are evaluated with several studies, suggesting that users can write keyword queries with little or no instruction, and that the resulting translations are often accurate. This is especially true in small domains like the web, whereas in a large domain like Java the accuracy is comparable to the accuracy of writing syntactically correct Java code without assistance. by Greg Little. S.M.
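    As a rough illustration of the idea (a toy word-overlap matcher, not the thesis's actual algorithms), a keyword query can be scored against the words appearing in each candidate expression's identifiers; the candidate list and scoring below are assumptions made for the sake of the example.

        // Toy keyword-to-code matcher: pick the candidate expression whose identifier
        // words overlap most with the user's keywords. Candidates are hand-written
        // here; a real system would derive them from the current context.
        interface Candidate {
          code: string;     // expression to emit
          words: string[];  // words derived from its identifiers and arguments
        }

        const candidates: Candidate[] = [
          { code: 'click(findButton("search"))', words: ["click", "search", "button"] },
          { code: "log.append(message)",         words: ["append", "message", "log"] },
        ];

        function translate(query: string): string | undefined {
          const queryWords = new Set(query.toLowerCase().split(/\s+/));
          let best: Candidate | undefined;
          let bestScore = 0;
          for (const c of candidates) {
            const score = c.words.filter((w) => queryWords.has(w)).length;
            if (score > bestScore) { bestScore = score; best = c; }
          }
          return best?.code;
        }

        console.log(translate("click search button"));   // -> click(findButton("search"))
        console.log(translate("append message to log")); // -> log.append(message)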

    Real-time collaborative coding in a web IDE

    This paper describes Collabode, a web-based Java integrated development environment designed to support close, synchronous collaboration between programmers. We examine the problem of collaborative coding in the face of program compilation errors introduced by other users, which make collaboration more difficult, and describe an algorithm for error-mediated integration of program code. Concurrent editors see the text of changes made by collaborators, but the errors reported in their view are based only on their own changes. Editors may run the program at any time, using only the error-free edits supplied so far and ignoring incomplete or otherwise error-generating changes. We evaluate this algorithm and interface on recorded data from previous pilot experiments with Collabode, and via a user study with student and professional programmers. We conclude that it offers appreciable benefits over naive continuous synchronization without regard to errors, and over manual version control. National Science Foundation (U.S.) (award IIS-0447800)
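    A highly simplified sketch of the error-mediated idea follows; it assumes each collaborator edits a disjoint region and treats the compiler as a black-box compiles() check, which is an assumption for illustration rather than Collabode's actual algorithm.

        // Each collaborator's latest text is shared on screen, but only their last
        // error-free text is promoted into the runnable version of the program.
        interface Region { owner: string; text: string }

        function compiles(regions: Region[]): boolean {
          // ...run the Java compiler on the concatenated program... (stand-in)
          return true;
        }

        class ErrorMediatedBuffer {
          private shared: Region[] = [];                 // what everyone sees on screen
          private lastGood = new Map<string, string>();  // last error-free text per owner

          edit(owner: string, text: string): void {
            const region = this.shared.find((r) => r.owner === owner);
            if (region) region.text = text; else this.shared.push({ owner, text });
            // Promote this edit only if it compiles together with the other
            // collaborators' last error-free regions.
            const trial = this.shared.map((r) =>
              r.owner === owner ? { ...r } : { ...r, text: this.lastGood.get(r.owner) ?? r.text });
            if (compiles(trial)) this.lastGood.set(owner, text);
          }

          runnableProgram(): string {
            // Running the program uses only error-free edits supplied so far.
            return this.shared.map((r) => this.lastGood.get(r.owner) ?? "").join("\n");
          }
        }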

    TurKit: Human Computation Algorithms on Mechanical Turk

    Mechanical Turk (MTurk) provides an on-demand source of human computation. This offers a tremendous opportunity to explore algorithms which incorporate human computation as a function call. However, various systems challenges make this difficult in practice, and most uses of MTurk post large numbers of independent tasks. TurKit is a toolkit for prototyping and exploring algorithmic human computation, while maintaining a straightforward imperative programming style. We present the crash-and-rerun programming model that makes TurKit possible, along with a variety of applications for human computation algorithms. We also present case studies of TurKit used for real experiments across different fields. Xerox Corporation; National Science Foundation (U.S.) (Grant No. IIS-0447800); Quanta Computer; Massachusetts Institute of Technology. Center for Collective Intelligence
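    The crash-and-rerun model can be sketched roughly as follows: the script is deterministic except for calls wrapped in a memoizing once() helper, so the whole program can crash and be rerun from the top without re-posting completed work. The names (once, postHitAndWait, trace.json) are illustrative assumptions, not TurKit's real API, and TurKit itself is written in JavaScript rather than TypeScript.

        // Minimal crash-and-rerun sketch: results of costly, nondeterministic steps
        // are recorded in a persistent trace keyed by execution order, and replayed
        // on subsequent runs of the same script.
        import * as fs from "fs";

        const DB_FILE = "trace.json";
        const trace: Record<string, unknown> = fs.existsSync(DB_FILE)
          ? JSON.parse(fs.readFileSync(DB_FILE, "utf8"))
          : {};
        let step = 0;

        async function once<T>(action: () => Promise<T>): Promise<T> {
          const key = String(step++);
          if (key in trace) return trace[key] as T;  // replay a recorded result
          const result = await action();             // otherwise do the slow, costly work
          trace[key] = result;
          fs.writeFileSync(DB_FILE, JSON.stringify(trace));
          return result;
        }

        async function postHitAndWait(prompt: string): Promise<string> {
          // ...create a HIT, poll until a worker answers... (stand-in)
          return "worker answer to: " + prompt;
        }

        // Usage: MTurk abstracted as a function call.
        async function main() {
          const answer = await once(() => postHitAndWait("Describe this image"));
          console.log(answer);
        }
        main();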

    Soylent: A Word Processor with a Crowd Inside

    This paper introduces architectural and interaction patterns for integrating crowdsourced human contributions directly into user interfaces. We focus on writing and editing, complex endeavors that span many levels of conceptual and pragmatic activity. Authoring tools offer help with pragmatics, but for higher-level help, writers commonly turn to other people. We thus present Soylent, a word processing interface that enables writers to call on Mechanical Turk workers to shorten, proofread, and otherwise edit parts of their documents on demand. To improve worker quality, we introduce the Find-Fix-Verify crowd programming pattern, which splits tasks into a series of generation and review stages. Evaluation studies demonstrate the feasibility of crowdsourced editing and investigate questions of reliability, cost, wait time, and work time for edits. National Science Foundation (U.S.) (Grant No. IIS-0712793)
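    In outline, Find-Fix-Verify can be sketched as below. The askWorkers and mostCommon helpers are hypothetical stand-ins for posting independent tasks and tallying responses; this is a simplified illustration of the pattern, not Soylent's implementation.

        // Find: workers mark a patch needing work. Fix: other workers rewrite only
        // that patch. Verify: a final set of workers votes on the candidate rewrites.
        async function askWorkers(prompt: string, n: number): Promise<string[]> {
          // ...post n independent HITs and collect the responses... (stand-in)
          return [];
        }

        function mostCommon(items: string[]): string {
          const counts = new Map<string, number>();
          for (const item of items) counts.set(item, (counts.get(item) ?? 0) + 1);
          return Array.from(counts.entries()).sort((a, b) => b[1] - a[1])[0]?.[0] ?? "";
        }

        async function findFixVerify(paragraph: string): Promise<string> {
          const marks = await askWorkers(
            "Select one sentence in need of editing:\n" + paragraph, 5);
          const patch = mostCommon(marks);

          const rewrites = await askWorkers(
            "Rewrite this sentence to shorten it:\n" + patch, 5);

          const votes = await askWorkers(
            "Which rewrite is best?\n" + rewrites.map((r, i) => i + ". " + r).join("\n"), 5);
          const winner = rewrites[Number(mostCommon(votes))] ?? patch;

          return paragraph.replace(patch, winner);
        }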

    VizWiz

    The lack of access to visual information like text labels, icons, and colors can cause frustration and decrease independence for blind people. Current access technology uses automatic approaches to address some problems in this space, but the technology is error-prone, limited in scope, and quite expensive. In this paper, we introduce VizWiz, a talking application for mobile phones that offers a new alternative for answering visual questions in nearly real time: asking multiple people on the web. To support answering questions quickly, we introduce quikTurkit, a general approach for intelligently recruiting human workers in advance so that workers are available when new questions arrive. A field deployment with 11 blind participants illustrates that blind people can effectively use VizWiz to cheaply answer questions in their everyday lives, highlighting issues that automatic approaches will need to address to be useful. Finally, we illustrate the potential of using VizWiz as part of the participatory design of advanced tools by using it to build and evaluate VizWiz::LocateIt, an interactive mobile tool that helps blind people solve general visual search problems.
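    The recruiting-in-advance idea behind quikTurkit can be sketched roughly as follows; the pool size, polling interval, and MTurk calls are all assumptions for illustration, not the deployed system.

        // Keep a small pool of workers engaged (e.g., answering old questions) before
        // any real question arrives, so new questions are answered in near real time.
        const TARGET_POOL_SIZE = 3;
        const pending: { photoUrl: string; question: string }[] = [];

        function postPlaceholderHit(): void {
          // ...post a HIT that keeps a worker occupied until a real question arrives... (stand-in)
        }

        function activeWorkers(): number {
          // ...query MTurk for how many workers currently hold our HITs... (stand-in)
          return 0;
        }

        // Recruit continuously so workers are already present when a question arrives.
        setInterval(() => {
          const deficit = TARGET_POOL_SIZE + pending.length - activeWorkers();
          for (let i = 0; i < deficit; i++) postPlaceholderHit();
        }, 15000);

        function askQuestion(photoUrl: string, question: string): void {
          // A waiting worker picks this up almost immediately, instead of after the
          // usual recruitment delay.
          pending.push({ photoUrl, question });
        }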

    Sloppy programming

    The essence of sloppy programming is that the user should be able to enter something simple and natural, such as a few keywords, and the computer should try everything within its power to interpret and make sense of this input. This chapter discusses several prototypes that implement sloppy programming, translating sloppy commands directly into executable code. It also describes the algorithms used in these prototypes, exposes their limitations, and proposes directions for future work. The techniques described in this discussion still just scratch the surface of a domain with great potential: translating sloppy commands into executable code. The chapter describes potential benefits to end users and expert programmers alike, and advocates a continued need for textual command interfaces. A number of prototypes are discussed, exploring this technology and what one can learn from them, including the fact that users can form commands for some of these systems without any training. Finally, it gives some high-level technical details about how to implement sloppy translation algorithms, with references for further reading. National Science Foundation (U.S.) (award number IIS-0447800); Quanta Computer (Firm) (T-Party project)
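    As a toy illustration of sloppy interpretation (not the chapter's actual algorithms), each call visible in the current scope can be scored against the user's sloppy input and the best-scoring interpretations offered back; the example scope below is hypothetical.

        // Rank every known call against the sloppy input by the fraction of the
        // user's words that the call's identifier words explain.
        interface Interpretation { call: string; score: number }

        function rankInterpretations(
          input: string, scope: Record<string, string[]>): Interpretation[] {
          const words = input.toLowerCase().split(/\s+/);
          return Object.entries(scope)
            .map(([call, identifierWords]) => ({
              call,
              score: words.filter((w) => identifierWords.includes(w)).length / words.length,
            }))
            .filter((i) => i.score > 0)
            .sort((a, b) => b.score - a.score);
        }

        // Hypothetical scope for a web-browsing domain:
        const webScope = {
          'click(findButton("search"))': ["click", "search", "button"],
          'enter("query", findTextbox("search"))': ["enter", "query", "search", "textbox"],
        };
        console.log(rankInterpretations("click the search button", webScope));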

    Rewriting the Web with Chickenfoot

    Unlike desktop applications, Web applications are much more exposed and open to modification. This chapter describes Chickenfoot, a programming system embedded in the Firefox Web browser, which enables end users to automate, customize, and integrate Web applications without examining their source code. One way Chickenfoot addresses this goal is with a technique for identifying page components by keyword pattern matching. Web automation includes navigating pages, filling in forms, and clicking on links. For example, many conferences now use a Web site to receive papers, distribute them to reviewers, and collect the reviews. A reviewer assigned 10 papers must download each paper, print it, and (later) upload a review for it. Tedious repetition is a good argument for automation. When integrating multiple Web sites, the simplest kind of integration is just adding links from one site to another, but much richer integration is possible. The chapter's techniques are developed by studying how users name Web page components, and it presents a heuristic keyword-matching algorithm that identifies the desired component from the name a user gives it. It describes a range of applications that have been created using Chickenfoot and reflects on its advantages and limitations. © 2010 Elsevier Inc. All rights reserved. National Science Foundation (U.S.) (award number IIS-0447800); Quanta Computer Incorporated (T-Party project)
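    A simplified sketch of heuristic keyword matching for page components, in the spirit of the click(findButton("search")) example above, might look like the following; the scoring and element selectors are assumptions, not Chickenfoot's actual heuristics.

        // Score each button-like element on the page by how many of the user's
        // keywords appear in its visible label, and return the best match.
        function findButton(keywords: string): HTMLElement | null {
          const words = keywords.toLowerCase().split(/\s+/);
          const elements = Array.from(document.querySelectorAll<HTMLElement>(
            'button, input[type="submit"], input[type="button"]'));
          let best: HTMLElement | null = null;
          let bestScore = 0;
          for (const el of elements) {
            const label = (el.textContent || el.getAttribute("value") || "").toLowerCase();
            const score = words.filter((w) => label.includes(w)).length;
            if (score > bestScore) { bestScore = score; best = el; }
          }
          return best;
        }

        // Usage, mirroring the command "click search button":
        findButton("search")?.click();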

    Risk of COVID-19 after natural infection or vaccination

    Summary: Background: While vaccines have established utility against COVID-19, phase 3 efficacy studies have generally not comprehensively evaluated protection provided by previous infection or hybrid immunity (previous infection plus vaccination). Individual patient data from US government-supported harmonized vaccine trials provide an unprecedented sample population to address this issue. We characterized the protective efficacy of previous SARS-CoV-2 infection and hybrid immunity against COVID-19 early in the pandemic over three- to six-month follow-up and compared it with vaccine-associated protection. Methods: In this post-hoc cross-protocol analysis of the Moderna, AstraZeneca, Janssen, and Novavax COVID-19 vaccine clinical trials, we allocated participants into four groups based on previous-infection status at enrolment and treatment: no previous infection/placebo; previous infection/placebo; no previous infection/vaccine; and previous infection/vaccine. The main outcome was RT-PCR-confirmed COVID-19 >7–15 days (per original protocols) after final study injection. We calculated crude and adjusted efficacy measures. Findings: Previous infection/placebo participants had a 92% decreased risk of future COVID-19 compared to no previous infection/placebo participants (overall hazard ratio [HR]: 0.08; 95% CI: 0.05–0.13). Among single-dose Janssen participants, hybrid immunity conferred greater protection than vaccine alone (HR: 0.03; 95% CI: 0.01–0.10). Too few infections were observed to draw statistical inferences comparing hybrid immunity to vaccine alone for the other trials. Vaccination, previous infection, and hybrid immunity all provided near-complete protection against severe disease. Interpretation: Previous infection, any hybrid immunity, and two-dose vaccination all provided substantial protection against symptomatic and severe COVID-19 through the early Delta period. Thus, as a surrogate for natural infection, vaccination remains the safest approach to protection. Funding: National Institutes of Health.